How to Get Started With Responsible AI

In my previous article, “The Vital Ingredients of Responsible AI,” I described the principles that underpin the need to develop AI systems that account for the human dimension: systems that not only contribute to business outcomes but also protect individuals, society and the environment.

While it’s difficult to argue with those principles, putting them into practice is far more complex. As is often the case, technology is the easy part. Organizations should also set up a governance framework to establish the necessary rules, standards, safeguards and processes in the end-to-end analytics life cycle, from data to decisions. There should be a proactive strategy for this and a focus on fostering a culture of responsibility, collectively and individually.

While the task might seem daunting, there are considerations that early adopters have found useful when getting started with responsible AI.

1. Contextualize the principles

Once you’ve defined your own version of responsible AI principles in line with your organization’s values and priorities, the first challenge will be to translate these principles into practical guidelines that you can communicate to all the people involved in developing, deploying and using AI systems. You need to contextualize them because a single principle might not apply in the same way depending on the industry you’re operating in, the country where the solution is deployed, the culture of your own organization and the intended use of the AI application.

For example, using the gender variable in health care might be critical to support the analysis and diagnosis of a medical condition. But using gender to approve a loan is generally not acceptable. Likewise, the level of transparency you need to make a next-best offer on a commercial website is not the same as the level you need to screen a résumé in a hiring process, flag potential fraud in social benefits claims or calculate the likelihood that a convicted criminal will re-offend.

The level of security you need for an AI system used for weather forecasting is not the same as for processing highly sensitive data for military applications. You must translate your principles into specific guidelines. And you need to establish some methodology to evaluate the ethical implications and the risks.

2. Embed responsible AI as part of your data and analytics strategy

The second recommendation is to avoid handling responsible AI as a topic on its own. You can't separate responsible AI from your data and analytics strategy. And you should look to embed responsible AI principles into your existing digital transformation programs.

Who should own responsible AI? There is currently hype around a new profession, the "AI ethicist.” On paper, this is some kind of unicorn: someone who masters a vast arsenal of AI tools and technologies, has enough experience to understand the business, the industry and the specific ethical traps it contains, communicates well across organizational boundaries, and brings extensive regulatory, legal and policy knowledge. In practice, you’re almost certain not to find one!

The good news is that you probably don’t need one because you will be better served with a team effort. Different people can bring their specific skills, and diversity will safeguard against individual biases.

Although you probably don’t need an AI ethicist, you should consider appointing a small team to lead this effort. It can orchestrate the operationalization of responsible AI principles across departments and at every stage of the AI life cycle. I would also recommend that this team report at the highest level possible on the business side of your organization. IT and analytics teams have some conflicts of interest and might not be the right places to host this responsibility.

Pitfalls to avoid

Avoid establishing your ethics team in the legal department. This would make responsible AI a pure compliance and liability issue. And neither should data scientists be the sole guardians or stewards of responsible AI requirements. Data scientists have a role to play in designing and developing predictive models that are transparent, explainable and deployable. But they only own a small part of the end-to-end AI life cycle.

Some argue that data science should become a regulated profession, or that data scientists should take some kind of oath, similar to the Hippocratic Oath taken by doctors. Although I can see the good intentions behind this, I’m not sure it would be practical. And it looks at responsible AI only through the lens of data science when it is, in fact, a much broader issue.

3. Infuse responsible AI principles at every stage of the AI life cycle

As I just discussed, data science is only a part of the end-to-end AI life cycle. You should look to infuse responsible AI principles at every step, from the outset and all the way through, with the appropriate checklists and safeguards. This starts with the inception and design of your AI system. Evaluate the ethical risks. And define mitigation actions and success criteria.

Data collection: During data collection, you should obtain individuals’ explicit consent to use their data and protect their sensitive attributes. You still want to collect some of that personal and/or sensitive data so you can measure and mitigate bias related to those variables later on. You should flag sensitive variables where you store the data, right alongside the consent information.
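To make this concrete, here is a minimal sketch, in Python, of what storing consent and sensitivity flags alongside the data could look like. The Record structure, the purposes and the list of sensitive fields are illustrative assumptions, not a prescription for any particular platform.

```python
# Minimal sketch (illustrative, not a specific product feature): keep
# per-purpose consent and sensitivity flags attached to each record so
# they travel with the data.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Record:
    values: Dict[str, object]                                  # the collected attributes
    consent: Dict[str, bool] = field(default_factory=dict)     # per-purpose consent
    sensitive_fields: tuple = ("gender", "ethnicity", "health_status")  # illustrative list

    def usable_for(self, purpose: str) -> bool:
        """Return True only if the individual consented to this purpose."""
        return self.consent.get(purpose, False)

# Example: this record may be used for bias measurement but not for marketing.
r = Record(
    values={"age": 42, "gender": "F", "income": 51000},
    consent={"bias_monitoring": True, "marketing": False},
)
print(r.usable_for("bias_monitoring"))  # True
print(r.usable_for("marketing"))        # False
```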

Data preparation and analysis: During data preparation and analysis, evaluate bias. Your training data should be representative of the population the algorithm is designed to serve. You might also consider using synthetic data to protect privacy and add more diversity to data sets, reducing bias in the training data.
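As an illustration, a representativeness check can be as simple as comparing group shares in the training data with a reference population. The groups, figures and five-percentage-point threshold below are made up for the example.

```python
# Minimal sketch: compare each group's share of the training data against a
# reference population and flag under-represented groups. Groups, counts and
# the 5-point threshold are illustrative assumptions.
training_counts = {"18-30": 1200, "31-50": 5200, "51+": 600}      # from your training set
population_share = {"18-30": 0.25, "31-50": 0.45, "51+": 0.30}    # e.g., census figures

total = sum(training_counts.values())
for group, count in training_counts.items():
    train_share = count / total
    gap = population_share[group] - train_share
    status = "UNDER-REPRESENTED" if gap > 0.05 else "ok"
    print(f"{group}: training {train_share:.1%} vs population {population_share[group]:.1%} -> {status}")
```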

Modeling and testing: During modeling and testing, there are some obvious things to do around documenting the code and using algorithmic explainability techniques such as partial dependence (PD), individual conditional expectation (ICE), LIME and SHAP. But my main recommendation is to keep things as simple as possible. Machine learning is not the answer to everything, and you need to balance the need for accuracy with transparency and explainability. The incremental benefit of using machine learning is sometimes not worth the effort to build and deploy those models, or the trade-offs in terms of explainability. So when a simple linear regression is good enough, just go for it!
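To illustrate the "good enough" point, here is a small sketch that compares a logistic regression with a gradient boosting model and keeps the simpler one unless the accuracy gain is material. The synthetic data and the one-point threshold are illustrative assumptions, not a universal rule.

```python
# Minimal sketch of the "simplest model that is good enough" idea:
# if a gradient boosting model barely beats a logistic regression,
# prefer the simpler, more explainable one.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

simple = LogisticRegression(max_iter=1000)
complex_model = GradientBoostingClassifier(random_state=0)

acc_simple = cross_val_score(simple, X, y, cv=5).mean()
acc_complex = cross_val_score(complex_model, X, y, cv=5).mean()

print(f"logistic regression: {acc_simple:.3f}, gradient boosting: {acc_complex:.3f}")
# Keep the complex model only if it adds more than 1 accuracy point (illustrative threshold).
chosen = complex_model if (acc_complex - acc_simple) > 0.01 else simple
print("chosen model:", type(chosen).__name__)
```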

Deployment: During the deployment of your analytical models and AI applications, the key is to use automated, predictable and repeatable processes, for instance, CI/CD (continuous integration/continuous delivery) pipelines. It’s also critical to capture lineage information along the way. This lets you trace what data was used to train models, what models were used to make predictions, how predictions were combined with business logic to make decisions, and what the outcome and impact of those decisions were.
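The sketch below shows, in a framework-agnostic way, the kind of lineage record you might capture for every scored transaction. The field names and values are illustrative assumptions; a real ModelOps platform would provide its own mechanisms for this.

```python
# Minimal, framework-agnostic sketch of capturing lineage metadata at scoring
# time, so every prediction can be traced back to the data, model version and
# business rule that produced it. All field names are illustrative.
import json
import uuid
from datetime import datetime, timezone

def lineage_record(training_data_ref: str, model_version: str,
                   inputs: dict, score: float, decision: str) -> dict:
    """Build one traceability record for a single scored transaction."""
    return {
        "prediction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "training_data_ref": training_data_ref,   # which data trained the model
        "model_version": model_version,           # which model produced the score
        "inputs": inputs,                         # what the model saw
        "score": score,                           # what the model predicted
        "decision": decision,                     # business rule applied on top of the score
    }

record = lineage_record("warehouse://loans/train_2024Q1", "credit-risk-1.4.2",
                        {"income": 51000, "age": 42}, 0.82, "refer to manual review")
print(json.dumps(record, indent=2))
```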

Production: Once an AI application is in production, assume it is biased and then try to disprove it. Proactively monitor bias in the outputs of your models and the decisions they drive. Establish a set of KPIs, supported by bias and fairness dashboards, and share them with stakeholders in ways they can understand.
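One example of such a KPI is the gap in approval rates between groups, often called the demographic parity difference. The sketch below computes it from scored decisions; the data and the ten-point alert threshold are illustrative assumptions, and in practice the result would feed a dashboard rather than a print statement.

```python
# Minimal sketch of one bias KPI: the gap in approval rates between groups
# (demographic parity difference). Data and the 10-point alert threshold
# are illustrative.
from collections import defaultdict

# (group, model_decision) pairs observed in production, illustrative data
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print("approval rates:", {g: f"{r:.0%}" for g, r in rates.items()})
print(f"demographic parity gap: {gap:.0%}" + ("  <- ALERT" if gap > 0.10 else ""))
```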

4. Leverage your ModelOps framework

Depending on how advanced you are with your analytics, you might have reached the point where the challenge is not to explore the realm of possibilities and experiment with new techniques anymore. You may be ready to operationalize your analytics and bring analytical insight or the output of predictive models into production at the front line of the business where actions are taken, decisions made and business outcomes realized.

To do this effectively, you need to industrialize the end-to-end life cycle, speeding up the deployment process and producing reliable and consistent outputs in a governed environment, to ultimately scale analytics-driven decisioning as part of your digital transformation. The methodology to do this is called ModelOps (or MLOps), which involves implementing a formal governance model for AI and automated deployment and monitoring processes.

What’s interesting is that the same governance framework and its underlying capabilities can also support many of the requirements of responsible AI. In effect, with ModelOps, you can kill three birds with one stone, if you’ll allow me the expression: you can scale your analytics, you can operationalize your analytics, and you can do it responsibly.

5. Don’t wait, start today!

My final recommendation is to not wait. Don’t wait for regulations to come into effect. Responsible AI principles cannot be implemented as an afterthought. Retrofitting them into existing AI applications will cost much more than embedding them from the start: you would probably have to rebuild everything and face an uphill battle to change behaviors and processes. The journey starts now!

If you want to discover more about this topic, watch the on-demand webinar Accelerate Innovation With Responsible AI by registering here.

About Author

Olivier Penel

Advisory Business Solutions Manager

With a long-lasting (and quite obsessive) passion for data, Olivier Penel strives to help organizations make the most of data, comply with data-driven regulations, fuel innovation with analytics, and create value from their most valuable asset: data. As a global leader at SAS for everything data management and privacy-related, Penel enjoys providing strategic guidance, and sharing best practices and experiences in using data governance and analytics as a catalyst for digital transformation.
